Leveraging convergence behavior to balance conflicting tasks in multi-task learning

Authors

Abstract

Multi-Task Learning is a learning paradigm that uses correlated tasks to improve performance generalization. A common way to learn multiple tasks is through the hard parameter sharing approach, in which a single architecture is used to share the same subset of parameters, creating an inductive bias between the tasks during the training process. Due to its simplicity, potential for generalization, and reduced computational cost, it has gained the attention of the scientific and industrial communities. However, tasks often conflict with each other, which makes it challenging to define how their gradients should be combined to allow simultaneous learning. To address this problem, we use ideas from multi-objective optimization and propose a method that takes into account the temporal behaviour of the tasks to create a dynamic bias that adjusts the importance of each task during backpropagation. The result is that more importance is given to tasks that are diverging or that have not benefited from the last iterations, ensuring that the joint optimization is heading towards the performance maximization of all tasks. As a result, we empirically show that the proposed method outperforms state-of-the-art approaches on conflicting tasks. Unlike the adopted baselines, our method ensures that all tasks reach good generalization performances.
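
The abstract describes weighting each task's contribution to backpropagation according to how its loss has recently been evolving. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' published algorithm: it fits a linear trend to each task's recent losses and assigns larger weights (via a softmax) to tasks whose losses are rising or flat. The class name, window size, and temperature are illustrative assumptions.

    import numpy as np

    class ConvergenceAwareWeighter:
        """Heuristic task re-weighting from recent loss trends (illustrative sketch)."""

        def __init__(self, num_tasks, window=10, temperature=1.0):
            self.window = window            # how many past iterations to inspect
            self.temperature = temperature  # softness of the weight distribution
            self.history = [[] for _ in range(num_tasks)]

        def update(self, losses):
            """Record per-task losses and return task weights that sum to one."""
            slopes = []
            for hist, loss in zip(self.history, losses):
                hist.append(float(loss))
                recent = hist[-self.window:]
                # Linear trend of the recent losses: positive slope means the task
                # is diverging, near zero means stagnating, negative means improving.
                if len(recent) > 1:
                    slope = np.polyfit(np.arange(len(recent)), recent, 1)[0]
                else:
                    slope = 0.0
                slopes.append(slope)
            # Softmax over slopes: diverging or stagnating tasks get larger weights.
            scores = np.asarray(slopes) / self.temperature
            weights = np.exp(scores - scores.max())
            return weights / weights.sum()

    # Hypothetical usage inside a training loop:
    # w = weighter.update([loss_a, loss_b, loss_c])
    # total_loss = w[0] * loss_a + w[1] * loss_b + w[2] * loss_c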

Similar resources

Exploiting Unrelated Tasks in Multi-Task Learning

We study the problem of learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out ...
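
The paragraph above attributes the benefit of jointly learning unrelated tasks to representations in which the two task groups screen out each other's features. As a rough illustration (an assumption on my part, not necessarily the method of the cited paper), one can penalize overlap between the linear heads of the principal and auxiliary tasks so that they rely on near-orthogonal subsets of the shared features:

    import numpy as np

    def unrelatedness_penalty(w_principal, w_auxiliary):
        """Sum of squared inner products between principal- and auxiliary-task
        head weights; minimizing it pushes the two groups towards using
        disjoint features of the shared representation (illustrative sketch)."""
        cross = w_principal @ w_auxiliary.T   # (n_principal_tasks, n_auxiliary_tasks)
        return float(np.sum(cross ** 2))

    # Example: two principal and three auxiliary linear heads over 8 shared features.
    penalty = unrelatedness_penalty(np.random.randn(2, 8), np.random.randn(3, 8))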

Leveraging Qualitative Reasoning to Learning Manipulation Tasks

Learning and planning are powerful AI methods that exhibit complementary strengths. While planning allows goal-directed actions to be computed when a reliable forward model is known, learning allows such models to be obtained autonomously. In this paper we describe how both methods can be combined using an expressive qualitative knowledge representation. We argue that the crucial step in this i...

Using Hierarchical Reinforcement Learning to Balance Conflicting Sub-Problems

This paper describes the adaption and application of an algorithm called Feudal Reinforcement Learning to a complex gridworld navigation problem. The algorithm proved to be not easily adaptable and the results were unsatisfactory.

Multi-task Learning with Labeled and Unlabeled Tasks

In multi-task learning, a learner is given a collection of prediction tasks and needs to solve all of them. In contrast to previous work, which required that annotated training data is available for all tasks, we consider a new setting, in which for some tasks, potentially most of them, only unlabeled training data is provided. Consequently, to solve all tasks, information must be transferred b...

Journal

Journal title: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2022.09.042